Denoising Diffusion Probabilistic Models (DDPMs) are a recent family of generative models that achieve state-of-the-art results. To obtain class-conditional generation, it was suggested to guide the diffusion process using gradients from a time-dependent classifier. While this idea is theoretically sound, deep learning-based classifiers are notoriously susceptible to gradient-based adversarial attacks. Therefore, while traditional classifiers may achieve good accuracy scores, their gradients may be unreliable and might hinder the improvement of the generation results. Recent work discovered that adversarially robust classifiers exhibit gradients that are aligned with human perception, and these could better guide a generative process towards semantically meaningful images. We utilize this observation by defining and training a time-dependent adversarially robust classifier and using it as guidance for a generative diffusion model. In experiments on the highly challenging and diverse ImageNet dataset, our scheme introduces significantly more intelligible intermediate gradients, better alignment with theoretical findings, and improved generation results under several evaluation metrics. Furthermore, we conduct an opinion survey whose findings indicate that human raters prefer our method's results.
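As a rough illustration of the classifier-guided sampling described above, the sketch below shifts the DDPM posterior mean along the gradient of a time-dependent classifier. Both `eps_model` and `classifier` are hypothetical placeholders for trained networks, and the schedule values (`alpha_t`, `alpha_bar_t`, `sigma_t`) are assumed tensor inputs; this is a minimal sketch of standard classifier guidance, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def guided_sampling_step(x_t, t, y, eps_model, classifier,
                         alpha_t, alpha_bar_t, sigma_t, guidance_scale=1.0):
    """One reverse-diffusion step guided by the classifier's gradient."""
    # Gradient of log p(y | x_t, t) with respect to the noisy input.
    x_in = x_t.detach().requires_grad_(True)
    log_probs = F.log_softmax(classifier(x_in, t), dim=-1)
    selected = log_probs[torch.arange(len(y)), y].sum()
    grad = torch.autograd.grad(selected, x_in)[0]

    # Standard DDPM posterior mean computed from the predicted noise.
    eps = eps_model(x_t, t)
    mean = (x_t - (1 - alpha_t) / (1 - alpha_bar_t).sqrt() * eps) / alpha_t.sqrt()

    # Shift the mean along the classifier gradient (classifier guidance).
    mean = mean + guidance_scale * sigma_t ** 2 * grad

    return mean + sigma_t * torch.randn_like(x_t)
```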
Over the past decade, deep learning-based networks have achieved unprecedented success in many tasks, including image classification. Despite these remarkable achievements, recent studies have shown that such networks are easily fooled by small malicious perturbations, also known as adversarial examples. This security weakness has led to extensive research aimed at obtaining robust models. Beyond the clear robustness advantages of such models, it was also observed that their input gradients align with human perception. Several works have identified perceptually aligned gradients (PAG) as a byproduct of robust training, but none has considered it as a standalone phenomenon or studied its own implications. In this work, we focus on this trait and test whether perceptually aligned gradients imply robustness. To this end, we develop a novel objective that directly promotes PAG when training classifiers and examine whether models with such gradients are more robust to adversarial attacks. Extensive experiments on CIFAR-10 and STL validate that such models have improved robust performance, revealing a surprising bidirectional connection between PAG and robustness.
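The abstract does not spell out the objective, so the sketch below shows only one plausible way to promote gradient alignment during training: a cross-entropy loss plus a cosine-similarity penalty between the input gradient and an assumed perceptually meaningful reference direction. All names here (`model`, `target_direction`, `lam`) are hypothetical, and this should not be read as the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def pag_loss(model, x, y, target_direction, lam=0.1):
    """Cross-entropy plus a gradient-alignment penalty (illustrative only)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Input gradient of the correct-class score.
    class_score = logits.gather(1, y.unsqueeze(1)).sum()
    grad = torch.autograd.grad(class_score, x, create_graph=True)[0]

    # Penalize misalignment between the input gradient and the reference direction.
    cos = F.cosine_similarity(grad.flatten(1), target_direction.flatten(1), dim=1)
    return ce + lam * (1.0 - cos).mean()
```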
Deep neural networks (DNNs) are highly sensitive to imperceptible malicious perturbations, known as adversarial attacks. Following the discovery of this vulnerability in real-world imaging and vision applications, the associated security concerns have attracted extensive research attention, and many defense techniques have been developed. Most of these defense methods rely on adversarial training (AT), i.e., training the classification network on images perturbed according to a specific threat model that defines the magnitude of the allowed modification. Although AT yields promising results, training on a specific threat model fails to generalize to other types of perturbations. A different approach utilizes a preprocessing step to remove the adversarial perturbation from the attacked image. In this work, we follow the latter path and aim to develop a technique that leads to robust classifiers under various realizations of threat models. To this end, we harness recent advances in stochastic generative modeling and leverage them for sampling from conditional distributions. Our defense relies on adding Gaussian i.i.d. noise to the attacked image, followed by a pretrained diffusion process, an architecture that performs a stochastic iterative procedure over a denoising network and yields results of high perceptual quality. The robustness obtained by this stochastic preprocessing step is validated through extensive experiments on the CIFAR-10 dataset, showing that our method outperforms the leading defense methods under various threat models.
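A minimal sketch of the purification-style defense described above, assuming a pretrained diffusion model exposed through a hypothetical `diffusion.denoise_from(x, t)` interface: inject Gaussian i.i.d. noise so the attacked image matches the diffusion marginal at some timestep, run the reverse process back to a clean estimate, and only then classify.

```python
import torch

def purify_then_classify(x_attacked, classifier, diffusion, t_star, alpha_bar_t):
    """Stochastic preprocessing defense (sketch under assumed interfaces)."""
    # Inject noise so x matches the diffusion marginal at timestep t_star.
    noise = torch.randn_like(x_attacked)
    x_noisy = alpha_bar_t.sqrt() * x_attacked + (1 - alpha_bar_t).sqrt() * noise

    # Reverse diffusion from t_star back to a clean, high-quality estimate.
    x_purified = diffusion.denoise_from(x_noisy, t_star)

    return classifier(x_purified)
```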
Unsupervised learning-based anomaly detection in latent space has gained importance since discriminating anomalies from normal data becomes difficult in high-dimensional space. Both density estimation and distance-based methods to detect anomalies in latent space have been explored in the past. These methods prove that retaining valuable properties of input data in latent space helps in the better reconstruction of test data. Moreover, real-world sensor data is skewed and non-Gaussian in nature, making mean-based estimators unreliable for skewed data. Furthermore, anomaly detection methods based on reconstruction error rely on Euclidean distance, which does not consider useful correlation information in the feature space and also fails to accurately reconstruct the data when it deviates from the training distribution. In this work, we address the limitations of reconstruction error-based autoencoders and propose a kernelized autoencoder that leverages a robust form of Mahalanobis distance (MD) to measure latent dimension correlation to effectively detect both near and far anomalies. This hybrid loss is aided by the principle of maximizing the mutual information gain between the latent dimension and the high-dimensional prior data space by maximizing the entropy of the latent space while preserving useful correlation information of the original data in the low-dimensional latent space. The multi-objective function has two goals -- it measures correlation information in the latent feature space in the form of a robust MD and simultaneously tries to preserve useful correlation information from the original data space in the latent space by maximizing mutual information between the prior and latent space.
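To make the latent-space Mahalanobis scoring concrete, the sketch below uses scikit-learn's Minimum Covariance Determinant estimator as one robust choice of location and covariance; `encoder` is an assumed trained encoder, and this is illustrative rather than the paper's exact kernelized formulation.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def fit_latent_statistics(latent_train):
    # Robust estimate of the latent mean and covariance (less sensitive to skew/outliers).
    return MinCovDet().fit(latent_train)

def mahalanobis_scores(mcd, latent_test):
    # Squared Mahalanobis distance of each latent vector to the robust center.
    return mcd.mahalanobis(latent_test)

# Usage sketch (encoder is a hypothetical trained model):
# z_train = encoder(x_train)
# mcd = fit_latent_statistics(z_train)
# scores = mahalanobis_scores(mcd, encoder(x_test))
# anomalies = scores > np.quantile(scores, 0.99)
```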
The usage of technologically advanced devices has seen a boom in many domains, including education, automation, and healthcare, with most of the services requiring Internet connectivity. To secure a network, device identification plays a key role. In this paper, a device fingerprinting (DFP) model, which is able to distinguish between Internet of Things (IoT) and non-IoT devices, as well as uniquely identify individual devices, has been proposed. Four statistical features have been extracted from five consecutive device-originated packets to generate individual device fingerprints. The method has been evaluated using the Random Forest (RF) classifier and different datasets. Experimental results have shown that the proposed method achieves up to 99.8% accuracy in distinguishing between IoT and non-IoT devices and over 97.6% in classifying individual devices. These signify that the proposed method is useful in assisting operators in making their networks more secure and robust to security breaches and unauthorized access.
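A minimal sketch of the fingerprinting pipeline, under the assumption that the statistical features are computed from packet sizes and inter-arrival times (the abstract does not name the four features); `build_dataset` is a hypothetical loader.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def packet_window_features(sizes, times):
    """Four statistics from one window of 5 consecutive packets (assumed choice)."""
    iat = np.diff(times)                      # inter-arrival times
    return [np.mean(sizes), np.std(sizes), np.mean(iat), np.std(iat)]

# X: one feature row per 5-packet window, y: device (or IoT/non-IoT) label.
# X, y = build_dataset(...)                   # hypothetical data loading
# clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# predictions = clf.predict(X_new)
```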
Multiple studies have focused on predicting the prospective popularity of an online document as a whole, without paying attention to the contributions of its individual parts. We introduce the task of proactively forecasting popularities of sentences within online news documents solely utilizing their natural language content. We model sentence-specific popularity forecasting as a sequence regression task. For training our models, we curate InfoPop, the first dataset containing popularity labels for over 1.7 million sentences from over 50,000 online news documents. To the best of our knowledge, this is the first dataset automatically created using streams of incoming search engine queries to generate sentence-level popularity annotations. We propose a novel transfer learning approach involving sentence salience prediction as an auxiliary task. Our proposed technique coupled with a BERT-based neural model exceeds nDCG values of 0.8 for proactive sentence-specific popularity forecasting. Notably, our study presents a non-trivial takeaway: though popularity and salience are different concepts, transfer learning from salience prediction enhances popularity forecasting. We release InfoPop and make our code publicly available: https://github.com/sayarghoshroy/InfoPopularity
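A hedged sketch of a BERT-based sentence regressor of the kind described: encode each sentence and regress a scalar popularity score. The model name, pooling choice, and the salience-then-popularity training order are assumptions made for illustration, not the paper's exact setup.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentenceRegressor(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.head(hidden[:, 0]).squeeze(-1)   # [CLS] pooling

# Transfer learning (schematic): first train this model on salience labels,
# then continue training the same weights on popularity labels with an MSE loss.
# tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
# scores = SentenceRegressor()(batch["input_ids"], batch["attention_mask"])
```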
Almost 80 million Americans suffer from hair loss due to aging, stress, medication, or genetic makeup. Hair and scalp-related diseases often go unnoticed in the beginning. Sometimes, a patient cannot differentiate between hair loss and regular hair fall. Diagnosing hair-related diseases is time-consuming as it requires professional dermatologists to perform visual and medical tests. Because of that, the overall diagnosis gets delayed, which worsens the severity of the illness. Due to their image-processing ability, neural network-based applications are used in various sectors, especially healthcare and health informatics, to predict deadly diseases like cancers and tumors. These applications assist clinicians and patients and provide an initial insight into early-stage symptoms. In this study, we used a deep learning approach that successfully predicts three main types of hair loss and scalp-related diseases: alopecia, psoriasis, and folliculitis. However, the limited prior work in this area, the unavailability of a proper dataset, and the degree of variety among the images scattered over the internet made the task challenging. 150 images were obtained from various sources and then preprocessed by denoising, image equalization, enhancement, and data balancing, thereby minimizing the error rate. After feeding the processed data into the 2D convolutional neural network (CNN) model, we obtained an overall training accuracy of 96.2%, with a validation accuracy of 91.1%. The precision and recall scores for alopecia, psoriasis, and folliculitis are 0.895, 0.846, and 1.0, respectively. We also created a dataset of the scalp images for future prospective researchers.
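For illustration, a small 2D CNN for the three-class problem (alopecia, psoriasis, folliculitis) might look like the sketch below; the layer sizes are assumptions, not the paper's exact architecture.

```python
import torch.nn as nn

scalp_cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 3),            # three disease classes
)
# Train with nn.CrossEntropyLoss() on the preprocessed (denoised, equalized,
# balanced) images described above.
```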
To date, no "information-theoretic" frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization. In this work, we consider the prospect of establishing such rates via several existing information-theoretic frameworks: input-output mutual information bounds, conditional mutual information bounds and variants, PAC-Bayes bounds, and recent conditional variants thereof. We prove that none of these bounds are able to establish minimax rates. We then consider a common tactic employed in studying gradient methods, whereby the final iterate is corrupted by Gaussian noise, producing a noisy "surrogate" algorithm. We prove that minimax rates cannot be established via the analysis of such surrogates. Our results suggest that new ideas are required to analyze gradient descent using information-theoretic techniques.
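For context, one standard instance of the input-output mutual information framework mentioned above is the following bound, stated here under a sigma-subgaussian loss assumption; the notation is ours, not the paper's.

```latex
% Expected generalization gap bounded via the mutual information between the
% algorithm's output W and the n-sample training set S (sigma-subgaussian loss assumed).
\[
  \bigl|\mathbb{E}\bigl[\mathcal{L}_{\mu}(W) - \mathcal{L}_{S}(W)\bigr]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W;S)}{n}},
\]
% where $\mathcal{L}_{\mu}$ and $\mathcal{L}_{S}$ denote the population and
% empirical risks, respectively.
```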
Prevailing methods for assessing and comparing generative AIs incentivize responses that serve a hypothetical representative individual. Evaluating models in these terms presumes homogeneous preferences across the population and engenders selection of agglomerative AIs, which fail to represent the diverse range of interests across individuals. We propose an alternative evaluation method that instead prioritizes inclusive AIs, which provably retain the requisite knowledge not only for subsequent response customization to particular segments of the population but also for utility-maximizing decisions.
We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. Language models are generally trained on publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes a semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems, enabling us to leverage the ability of LLMs to zero-shot answer reading comprehension questions. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions and, as a result, cannot handle questions corresponding to missing slots. To address this problem, we fine-tune a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can correctly parse ~16% of utterances in the MTOP dataset without requiring any annotated data.
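A hedged sketch of the QA-style decomposition: query the LLM once for the top-level intent and once per candidate slot, then assemble the meaning representation. `ask_llm` stands for any zero-shot completion call, and the question templates are illustrative assumptions rather than the paper's prompts.

```python
def parse_utterance(utterance, intents, slot_names, ask_llm):
    """Decompose parsing into one intent question and one question per slot."""
    # Abstractive question for the top-level intent.
    intent = ask_llm(
        f'"{utterance}"\nWhich of these best describes the request: '
        f'{", ".join(intents)}?'
    ).strip()

    # Extractive questions for each candidate slot; unanswerable questions
    # (missing slots) are expected to yield "none".
    slots = {}
    for slot in slot_names:
        answer = ask_llm(
            f'"{utterance}"\nWhat is the {slot}? '
            f'Answer with a span from the sentence, or "none".'
        ).strip()
        if answer.lower() != "none":
            slots[slot] = answer

    return {"intent": intent, "slots": slots}
```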